RESEARCH & DEVELOPMENT (R&D)
Silicon Interfaces® teams have undertaken Research and Development in diverse areas, such as Power Verification™ - UPVM© - Low Power Class-based Library, scaling down Fault Simulation cycle time using Artificial Intelligence and Machine Learning modeling techniques, and PCIe Race Condition resolution with Data Integrity/Security using a Memory Manager and Fabric Matrix. These research areas have been undertaken by Silicon Interfaces Research Engineers to solve specific problems in the domains of early-stage Low Power Verification, application of AI/ML to Fault Simulation, Data Integrity/Security, as well as multi-core ALU enhancements using Vedic Multipliers. The research findings have been presented at several conferences in North America, Europe, and Asia Pacific:
Scaling down Fault Simulation cycle time using Artificial Intelligence and Machine Learning modeling techniques
- Introduction
- In the rapidly advancing field of electronic design, ensuring that circuits and systems work correctly is a major challenge. Traditionally, engineers use specific fault models, such as Stuck@0 and Stuck@1, to find faults in their designs. These methods simulate potential errors in the circuits, helping identify problems.
- To improve this process, our paper introduces a new approach that blends traditional testing methods with modern fifth-generation Artificial Intelligence (AI) techniques, specifically machine learning (ML). This combination aims to make testing faster and more accurate, reducing the need for exhaustive testing and saving valuable time.
- First, we train a model on up to 80% of the data, tuning it with hyper-parameters, activation functions, optimizers, etc. We then check how well the AI model works by testing it against the remaining 20% of the data.
- Once we have a robust model, we can use that model to predict the outputs for our input vectors, which are then matched against the golden test suite to identify specific failure cases.
- Our innovative method, combining AI with traditional simulation, not only speeds up the testing process but also reduces the reliance on extensive manual testing. This paper explores how this new approach can help electronics manufacturers more efficiently verify their designs and get products to market faster, with fewer errors.
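The 80/20 split and model-fitting step described above can be sketched in Python with scikit-learn. This is a minimal, hypothetical sketch: the input vectors and output labels here are synthetic stand-ins (a simple XOR relation), not data from an actual simulator run, and the hyper-parameter choices are illustrative only.

```python
# Sketch of the 80/20 train/test workflow described above, assuming
# scikit-learn. The data is a synthetic stand-in for simulator output:
# 4-bit input vectors whose label is the XOR of the first two bits.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 4))   # illustrative input vectors
y = X[:, 0] ^ X[:, 1]                    # illustrative output label

# 80% of the data for training, the remaining 20% held out for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42)

# Hidden layers, activation function, and optimizer are example
# hyper-parameters, not the ones used in the paper.
model = MLPClassifier(hidden_layer_sizes=(16, 16), activation="relu",
                      solver="adam", max_iter=2000, random_state=42)
model.fit(X_train, y_train)

# Check how well the model works against the remaining 20% of the data.
acc = accuracy_score(y_test, model.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")
```

In practice the same pattern applies with real simulator dumps: the feature matrix is the set of input vectors and the labels are the simulated outputs under fault conditions.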
- Remodeling
- Stuck-at faults were generated on a DUT by forcing the signals to be logically high (1) and low (0). Using a traditional EDA simulator, the output Y results were redirected to a file along with the input vectors.
- Stuck@0 faults simulate conditions where a signal is fixed at 0, regardless of the intended logical value.
- Stuck@1 faults simulate conditions where a signal is fixed at 1.
- Using traditional EDA tools, we “simulated” Stuck@0 and Stuck@1 faults at critical points in the PCIe design. Each simulation run generated a set of data reflecting the behavior of the system under these fault conditions.
- This data includes details such as signal integrity, timing discrepancies, and any resultant errors in the operation of the PCIe interface.
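The fault-injection step can be illustrated with a small Python sketch. The three-gate design, net names, and `evaluate` helper below are hypothetical; a real flow would force signals inside an EDA simulator on the actual PCIe design and redirect the outputs to a file.

```python
# Hypothetical sketch of stuck-at fault injection on a tiny design
# Y = (a AND b) OR c, standing in for forcing signals in an EDA simulator.
from itertools import product

def evaluate(a, b, c, fault=None):
    """Evaluate Y, optionally forcing one net; fault = (net, stuck_value)."""
    def v(net, val):
        # If this net is the faulted one, override it with the stuck value.
        return fault[1] if fault and fault[0] == net else val
    a, b, c = v("a", a), v("b", b), v("c", c)
    n1 = v("n1", a & b)          # internal AND node
    return n1 | c

# Run every input vector under every Stuck@0 / Stuck@1 fault and record
# the observed output alongside whether it differs from the fault-free run.
rows = []
for a, b, c in product([0, 1], repeat=3):
    golden = evaluate(a, b, c)
    for net in ("a", "b", "c", "n1"):
        for stuck in (0, 1):
            y = evaluate(a, b, c, fault=(net, stuck))
            rows.append((a, b, c, net, stuck, y, y != golden))

detected = sum(r[-1] for r in rows)
print(f"{detected}/{len(rows)} fault cases observable at Y")
```

Each row of `rows` corresponds to one line of the simulator dump: the input vector, the injected fault, and the resulting output.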
- The raw data from these simulations often contains redundancy and inconsistencies, which can obscure meaningful insights. Therefore, we undertook the following steps:
- Redundancy Removal: Duplicate entries and repetitive data points that did not contribute new information to the analysis were removed. This step ensures that the dataset is streamlined and focuses on unique fault impacts.
- Inconsistency Removal: Inconsistencies, such as contradictory results or data points arising from transient simulation artifacts, were identified and rectified or removed. This step ensures the reliability and accuracy of the data.
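The two cleaning steps above can be sketched with pandas. The column names and toy values below are illustrative assumptions, standing in for a simulator dump loaded into a DataFrame.

```python
# Sketch of redundancy and inconsistency removal, assuming the simulator
# dump has been loaded into a pandas DataFrame with input-vector columns
# (in_a, in_b) and an observed output column (out_y). Names are illustrative.
import pandas as pd

df = pd.DataFrame({
    "in_a":  [0, 0, 0, 1, 1, 1, 1],
    "in_b":  [0, 0, 1, 0, 0, 1, 1],
    "out_y": [0, 0, 1, 1, 0, 1, 1],   # the two (1, 0) rows contradict
})

# Redundancy removal: drop exact duplicate entries that add no information.
df = df.drop_duplicates()

# Inconsistency removal: drop input vectors that map to more than one
# output (e.g. contradictory results from transient simulation artifacts).
consistent = df.groupby(["in_a", "in_b"])["out_y"].transform("nunique") == 1
df = df[consistent].reset_index(drop=True)
print(df)
```

After both steps the dataset contains only unique, mutually consistent fault records, which is what the model-building stage consumes.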
- Model Building
- Next Run
- After training the model on 80% of the data, we compare its predicted values with the actual values of the remaining 20% test data to assess the accuracy of the predictions.
- The predicted data matched the actual data for this 20% as well, indicating that the predictions are correct and reliable for this data set.
- Next, once we have a robust model, we can use it to predict the outputs for our input vectors. These predicted outputs are then matched against the golden test suite, which contains the correct or expected outputs. By comparing the model's predictions with the golden test suite, we can identify specific failure cases where the model's predictions deviate from the expected results. These failure cases highlight areas where the DUT is diverging, and we can analyze these discrepancies to understand the reasons behind them, such as DD, PD, ND, HA, IA, NO, NC, NT, UU, UB, etc.
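The golden-suite comparison described above reduces to a simple diff between predicted and expected outputs. The vectors and values in this sketch are made up for illustration; a real golden test suite would come from the fault-free simulation.

```python
# Sketch of matching model predictions against a golden test suite.
# Both mappings (input vector -> output) use illustrative values.
golden    = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # expected outputs
predicted = {(0, 0): 0, (0, 1): 1, (1, 0): 0, (1, 1): 0}  # model outputs

# Failure cases: input vectors where the prediction deviates from golden.
failures = [vec for vec, expected in golden.items()
            if predicted[vec] != expected]
print("failure cases:", failures)
```

Each entry in `failures` is a candidate discrepancy to analyze further for its root cause.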
- Summary
- Our study confirms that integrating machine learning (ML) with traditional fault simulation enhances the validation process for electronic designs.
- Using visual tools like heat maps for performance assessment makes the process more intuitive and transparent. This approach represents a significant advancement in electronic design automation (EDA), indicating a promising future for AI-driven methods in streamlining and optimizing design validation.
- One limitation to consider is that the AI/ML model must deliver its results in significantly less time, and AI processor usage must not become more expensive than the distributed parallel computers it seeks to replace.
- Future work should aim for one AI/ML model that can "fit" all designs; this may be undertaken by combining best-of-breed techniques into a model that can be used generically, though specific modules may still be required for complex designs.
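The heat-map assessment mentioned in the summary is typically built from a confusion matrix of predicted versus actual outputs. The sketch below computes such a matrix with NumPy on made-up values; a plotting library such as seaborn or matplotlib (assumed, not part of this snippet) would render it as the heat map.

```python
# Sketch of the data behind a performance heat map: a confusion matrix
# of actual vs. predicted outputs. The label arrays are illustrative.
import numpy as np

actual    = np.array([0, 0, 1, 1, 1, 0, 1, 0])
predicted = np.array([0, 1, 1, 1, 0, 0, 1, 0])

cm = np.zeros((2, 2), dtype=int)
for a, p in zip(actual, predicted):
    cm[a, p] += 1        # rows: actual class, columns: predicted class

print(cm)                # diagonal entries are correct predictions
```

Rendering `cm` as a heat map makes it easy to see at a glance where the model's predictions concentrate and where they diverge from the simulation results.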
- The subject matter described here is subject to copyrights and patentable ideas, and it is recommended that you click here to contact Silicon Interfaces for more information and an NDA for "Scaling down Fault Simulation cycle time using Artificial Intelligence and Machine Learning modeling techniques".